Streaming with video orientation coordination (CVO)
Patent abstract:
STREAMING WITH VIDEO ORIENTATION COORDINATION (CVO). A technology to provide streaming with video orientation coordination (CVO) is described. In one example, a server may include computer circuitry configured to: receive a device capability for a client; and modify content that is streamed to the client based on an inclusion of a CVO attribute in the device capability.
Publication number: BR112015006675A2
Application number: R112015006675-5
Filing date: 2013-10-28
Publication date: 2020-08-18
Inventor: Ozgur Oyman
Applicant: Intel Corporation
IPC main class:
Patent description:
[0001] This application claims the benefit of and hereby incorporates by reference U.S. Provisional Patent Application Serial No. 61/719,241, filed on October 26, 2012, with attorney docket number P50328Z. This application claims the benefit of and hereby incorporates by reference U.S. Provisional Patent Application Serial No. 61/753,914, filed on January 17, 2013, with attorney docket number P53504Z. This application claims the benefit of and hereby incorporates by reference U.S. Provisional Patent Application Serial No. 61/841,230, filed on May 28, 2013, with attorney docket number P574602Z.
BACKGROUND
[0002] Wireless mobile communication technology uses various standards and protocols to transmit data between a node (for example, a transmission station) and a wireless device (for example, a mobile device). Some wireless devices communicate using orthogonal frequency-division multiple access (OFDMA) in a downlink (DL) transmission and single-carrier frequency-division multiple access (SC-FDMA) in an uplink (UL) transmission. Standards and protocols that use orthogonal frequency-division multiplexing (OFDM) for signal transmission include the third generation partnership project (3GPP) long term evolution (LTE), the Institute of Electrical and Electronics Engineers (IEEE) 802.16 standard (for example, 802.16e, 802.16m), which is commonly known to industry groups as WiMAX (Worldwide interoperability for Microwave Access), and the IEEE 802.11 standard, which is commonly known to industry groups as WiFi.
[0003] In 3GPP radio access network (RAN) LTE systems, the node can be a combination of Evolved Universal Terrestrial Radio Access Network (E-UTRAN) Node Bs (also commonly denoted as evolved Node Bs, enhanced Node Bs, eNodeBs or eNBs) and Radio Network Controllers (RNCs), which communicate with the wireless device, known as a user equipment (UE). The downlink (DL) transmission can be a communication from the node (for example, eNodeB) to the wireless device (for example, UE), and the uplink (UL) transmission can be a communication from the wireless device to the node.
[0004] The wireless device can be used to receive multimedia delivery of Internet video using various protocols, such as hypertext transfer protocol (HTTP) streaming. A protocol to provide HTTP-based delivery of video streaming can include dynamic adaptive streaming over HTTP (DASH).
BRIEF DESCRIPTION OF THE DRAWINGS
[0005] Features and advantages of the disclosure will be apparent from the detailed description that follows, taken in conjunction with the accompanying drawings, which together illustrate, by way of example, features of the disclosure; and, wherein:
[0006] Figure 1 illustrates a diagram of functional components in a packet-switched streaming service (PSS) capability exchange in accordance with an example;
[0007] Figure 2 illustrates example dynamic adaptive streaming over hypertext transfer protocol (HTTP) (DASH-based) streaming with video orientation coordination (CVO) in accordance with an example;
[0008] Figure 3 illustrates example real-time streaming protocol (RTSP-based) streaming with video orientation coordination (CVO) in accordance with an example;
[0009] Figure 4 illustrates an example of video orientation coordination (CVO) information embedded in a third generation partnership project (3GPP) file format (3GP) file using an International Organization for Standardization (ISO)-based media file format (ISO-BMFF) in accordance with an example;
[0010] Figure 5 illustrates a hierarchy of an International Organization for Standardization (ISO) file format box structure in accordance with an example;
[0011] Figure 6 illustrates example server-client interaction using device orientation-aware multimedia adaptation based on receiving content with embedded video orientation coordination (CVO) information in accordance with an example;
[0012] Figure 7 (i.e., Table 3) illustrates a table of extensible markup language (XML) syntax of common group and representation attributes and elements in accordance with an example;
[0013] Figure 8 illustrates a block diagram of a media presentation description (MPD) metadata file configuration in accordance with an example;
[0014] Figure 9 depicts a flowchart of a method for signaling the video orientation coordination (CVO) capability of a mobile terminal (MT) at a server in accordance with an example;
[0015] Figure 10 depicts functionality of computer circuitry of a server operable to provide streaming with video orientation coordination (CVO) in accordance with an example;
[0016] Figure 11 illustrates a diagram of a server, a node and a user equipment (UE) in accordance with an example; and
[0017] Figure 12 illustrates a diagram of a wireless device (for example, a UE) in accordance with an example.
[0018] Reference will now be made to the exemplary embodiments illustrated, and specific language will be used in this document to describe them. It will nevertheless be understood that no limitation of the scope of the invention is thereby intended.
DETAILED DESCRIPTION
[0019] Before the present invention is disclosed and described, it is to be understood that this invention is not limited to the particular structures, process steps or materials disclosed in this document, but is extended to equivalents thereof as would be recognized by those of ordinary skill in the relevant arts. It should also be understood that the terminology used in this document is used for the purpose of describing particular examples only and is not intended to be limiting. The same reference numerals in different figures represent the same element. Numbers provided in flowcharts and processes are provided for clarity in illustrating steps and operations and do not necessarily indicate a particular order or sequence.
EXAMPLE EMBODIMENTS
[0020] An initial overview of technology embodiments is provided below, and specific technology embodiments are then described in more detail. This initial summary is intended to help readers understand the technology more quickly, but it is not intended to identify key or essential features of the technology, nor is it intended to limit the scope of the claimed subject matter.
[0021] The growth of multimedia services, including streaming and conversational services, is one of the drivers of the evolution towards new mobile broadband technologies and standards. With high consumer demand for multimedia services coupled with developments in media compression and wireless network infrastructure, it is desirable to enhance the multimedia service capabilities of cellular and mobile broadband systems, where the multimedia service capabilities can be used to deliver a high quality of experience (QoE) to consumers, ensuring ubiquitous access to video content and services from any location, at any time, with any device and technology. Supporting various mobile devices and providing media handling procedures and session management protocols optimized for various device classes and capabilities can be used to enable delivery of multimedia content with high QoE in a ubiquitous fashion.
[0022] With the introduction of orientation sensors in mobile devices used in real-time video communication, the displayed content can be rotated to be aligned with the device orientation. For example, the orientation sensor can detect the device orientation by measuring the gravity field. Other types of orientation sensors may also be used. The device orientation can then be used in applications to adjust device functions according to the orientation. For example, the device can rotate the user interface or the video orientation to either a portrait or a landscape mode based on the device orientation.
[0023] Because some client devices contain an orientation sensor, the content or service provider may provide different encoded versions of the content optimized for different device orientations, or the content/service provider may capture or transcode the content during content capture (for example, on the fly) in order to deliver an optimized experience. Signaling from the user equipment (UE) to the network of the orientation sensor capabilities and/or the current device orientation can provide opportunities to adapt the content on the network side to deliver a high quality client experience. Multimedia adaptation based on device orientation and/or video orientation can apply to both two-dimensional (2D) and three-dimensional (3D) video applications. For a 2D video example, portrait or landscape video views and/or different viewing angles can be adapted based on the device orientation. For a 3D video example, the different viewing angles and the depth information can be adapted based on the device orientation.
[0024] Capability exchange signaling can be an important functionality in the third generation partnership project's (3GPP's) packet-switched streaming service (PSS) (as described in 3GPP technical specification (TS) 26.234 V11.1.0 (2012-09)), dynamic adaptive streaming over HTTP (DASH) (as described in 3GPP TS 26.247 V11.0.0 (2012-09)), and integrated multimedia subsystem (IMS)-based PSS and multimedia broadcast and multicast service (MBMS), abbreviated as IMS PSS MBMS (as described in 3GPP TS 26.237 V11.0.0 (2012-06)). Capability exchange enables PSS, DASH and IMS PSS MBMS servers to provide a wide range of devices with content suitable for the particular device in question based on knowledge of the specific capabilities of the mobile terminal.
[0025] During the establishment of a streaming session, a server can use the device capability description to provide the mobile terminal with the correct type of multimedia content. Servers can use information about the capabilities of the mobile terminal to determine which stream to provide to the connecting terminal (for example, the mobile terminal). For example, the server can compare the requirements of multiple available variants of a stream against the actual capabilities of the connecting terminal in order to determine the stream best suited for that particular mobile terminal. Capability exchange also allows delivery of an optimized session description protocol (SDP) file to a client terminal (for example, the mobile terminal) for a PSS or IMS PSS MBMS session, or of a media presentation description (MPD) metadata file optimized for the client terminal for a DASH session.
[0026] Figure 1 illustrates an example of capability exchange for PSS services. In the illustration, the mobile terminal 110 (or client device or client terminal) can inform the PSS server 130 about the identity of the MT so that the PSS server can retrieve a device capability profile from a device profile server 120, which can store the device capability profile 122 for the mobile terminal. The MT can send an HTTP and/or RTSP request to the PSS server 170. The HTTP and/or RTSP request can include a uniform resource locator (URL) descriptor (URLdesc) and/or a profile difference header (profileDiff). The PSS server can send an HTTP request to the device profile server for the device capability profile for the MT 160. The device profile server can send an HTTP response to the PSS server with the device capability profile for the MT 162. The PSS server can match or copy the device capability profile 132. The PSS server can send HTTP and/or RTSP responses 172 and media content 174 to the MT based on the device capability profile for the MT. In one example, a terminal manufacturer or a software vendor can maintain a device profile server to provide device capability profiles for the manufacturer's or vendor's products. In another example, an operator can manage a device profile server for the operator's subscribers, which can allow the subscribers to make user-specific updates to their profiles. The device profile server can provide device capability profiles to the PSS server upon request.
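As an illustration of the capability exchange flow of Figure 1, the following sketch shows how a PSS server might fetch a device capability profile over HTTP and read CVO-related attributes from it. It is a minimal, hypothetical example: the profile URL, the RDF/XML handling and the helper functions are assumptions for illustration and are not defined by the specifications cited above; the attribute names follow the PSS vocabulary additions described below.

# Hypothetical sketch of the PSS-server side of capability exchange (Figure 1).
# Assumes the device profile server exposes the profile as RDF/XML at a known URL.
import urllib.request
import xml.etree.ElementTree as ET

def fetch_device_capability_profile(profile_url: str) -> ET.Element:
    """Retrieve the device capability profile (RDF/XML) for a mobile terminal."""
    with urllib.request.urlopen(profile_url) as response:
        return ET.fromstring(response.read())

def profile_reports_cvo(profile_root: ET.Element) -> bool:
    """Return True if any CVO-related attribute in the profile is set to 'Yes'."""
    cvo_attribute_names = {
        "StreamingCVOCapable",
        "StreamingHighGranularityCVOCapable",
        "ThreeGPCVOCapable",
        "ThreeGPHighGranularityCVOCapable",
    }
    for element in profile_root.iter():
        # Strip any XML namespace before comparing the local element name.
        local_name = element.tag.split("}")[-1]
        if local_name in cvo_attribute_names and (element.text or "").strip() == "Yes":
            return True
    return False

# Example use (the URL is illustrative only):
# profile = fetch_device_capability_profile("http://profiles.example.com/mt-1234.rdf")
# client_is_cvo_capable = profile_reports_cvo(profile)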
[0027] The technology (for example, servers, client devices or client terminals, mobile terminals, methods, computer circuitry and systems) as described in this document can provide the video orientation coordination (CVO) capability of the mobile terminal or client device. The different streaming paradigms (for example, PSS, DASH and IMS PSS MBMS) can use different multimedia adaptation methods and processes, which are explained in more detail below.
[0028] A service can use a pull-based streaming process or a push-based streaming process. DASH provides an example of pull-based streaming. For a DASH session, an HTTP server 230 provides content optimized for different CVOs to a DASH client 220, as shown in Figure 2. The HTTP server can use the device capability exchange signaling from the DASH client describing the various supported CVO states 240. The set of CVOs and the corresponding content information can be signaled to the DASH client in the media presentation description (MPD) metadata file 242, with different encoded content for different CVOs; this server-client interaction is depicted in Figure 2. The DASH client can then track the current CVO and request the corresponding versions of the content optimized for the current CVO.
[0030] Additional attributes can be added to the PSS vocabulary device capability exchange signaling. For example, the attributes "StreamingCVOCapable", "StreamingHighGranularityCVOCapable", "ThreeGPCVOCapable" and "ThreeGPHighGranularityCVOCapable" (or attributes with similar functionality) can be included or added in the Streaming and ThreeGPFileFormat components of the PSS base vocabulary in 3GPP TS 26.234, which describes the PSS specification, and TS 26.244 V11.1.0 (2012-09), which describes the 3GPP file format specification. The attributes can have a name, a definition, a component, a type, legal values (or valid options) and a resolution rule. A possible syntax for these additional attributes can be as follows (a sketch of how a server might act on these attributes is given after the definitions below):
[0031] Attribute name: StreamingCVOCapable
[0032] Attribute definition: Indicates whether the client is a CVO-capable receiver of RTP streams, that is, whether, when the video orientation information of the delivered content is communicated to the client in an RTP extension header (corresponding to "urn:3gpp:video-orientation"), the client can interpret the video orientation and align the video correctly for rendering/display purposes. If this attribute is reported and the StreamingHighGranularityCVOCapable attribute is reported as "Yes", then the value of this attribute is "Yes".
[0033] Component: Streaming
[0034] Type: Literal
[0035] Legal values: "Yes", "No"
[0036] Resolution rule: Locked
[0037] Attribute name: StreamingHighGranularityCVOCapable
[0038] Attribute definition: Indicates whether the client is a Higher Granularity CVO-capable receiver of RTP streams, that is, whether, when the video orientation information of the delivered content is communicated to the client in an RTP extension header (corresponding to "urn:3gpp:video-orientation:6"), the client can interpret the video orientation and align the video correctly for rendering/display purposes.
[0039] Component: Streaming
[0040] Type: Literal
[0041] Legal values: "Yes", "No"
[0042] Resolution rule: Locked
[0043] Attribute name: ThreeGPCVOCapable
[0044] Attribute definition: Indicates whether the client is a CVO-capable receiver of 3GP files, that is, whether, when the video orientation information (corresponding to "urn:3gpp:video-orientation") of the delivered content is communicated to the client in a 3GP file, the client can interpret the video orientation and align the video correctly for rendering/display purposes. If this attribute is reported and the ThreeGPHighGranularityCVOCapable attribute is reported as "Yes", then the value of this attribute is "Yes".
[0045] Component: ThreeGPFileFormat
[0046] Type: Literal
[0047] Legal values: "Yes", "No"
[0048] Resolution rule: Locked
[0049] Attribute name: ThreeGPHighGranularityCVOCapable
[0050] Attribute definition: Indicates whether the client is a Higher Granularity CVO-capable receiver of 3GP files, that is, whether, when the video orientation information (corresponding to "urn:3gpp:video-orientation:6") of the delivered content is communicated to the client in a 3GP file, the client can interpret the video orientation and align the video correctly for rendering/display purposes.
[0051] Component: ThreeGPFileFormat
[0052] Type: Literal
[0053] Legal values: "Yes", "No"
[0054] Resolution rule: Locked
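The sketch below illustrates how a server might represent the four PSS vocabulary attributes defined above for a given client and use them to decide between sending CVO metadata unchanged and transcoding the content to correct misalignment. It is only an illustration: the dictionary representation and the decision function are assumptions, not part of the PSS vocabulary itself.

# Illustrative only: a server-side view of the PSS CVO capability attributes
# defined above, reduced to a dictionary of "Yes"/"No" values per client.
client_cvo_capabilities = {
    "StreamingCVOCapable": "Yes",
    "StreamingHighGranularityCVOCapable": "No",
    "ThreeGPCVOCapable": "Yes",
    "ThreeGPHighGranularityCVOCapable": "No",
}

def delivery_strategy(capabilities: dict, delivery: str) -> str:
    """Choose how to serve oriented content for RTP streaming or 3GP file delivery.

    Returns "send-as-is-with-cvo" when the client can interpret CVO metadata,
    otherwise "transcode-to-correct-orientation" so the server corrects any
    misalignment before delivery (see the adaptation discussion further below).
    """
    if delivery == "rtp":
        capable = (capabilities.get("StreamingCVOCapable") == "Yes"
                   or capabilities.get("StreamingHighGranularityCVOCapable") == "Yes")
    elif delivery == "3gp":
        capable = (capabilities.get("ThreeGPCVOCapable") == "Yes"
                   or capabilities.get("ThreeGPHighGranularityCVOCapable") == "Yes")
    else:
        capable = False
    return "send-as-is-with-cvo" if capable else "transcode-to-correct-orientation"

print(delivery_strategy(client_cvo_capabilities, "rtp"))   # send-as-is-with-cvo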
[0055] The technology described in this document can also embed CVO information in the captured content, such as in a 3GP file. CVO information can be embedded in a 3GP file (as described in 3GPP technical specification (TS) 26.244 V11.1.0).
[0056] In another example, a service specification may support orientation-aware streaming.
[0057] A CVO-capable PSS client can rotate the video to compensate for the rotation, for both CVO and Higher Granularity CVO. When both rotation and flip are compensated, the operations can be performed in the order LSB to MSB (that is, rotation compensation first, and then flip compensation).
[0058] A CVO-capable PSS server can attach the payload bytes to the last RTP packet in each group of packets that make up a key frame (for example, an intra frame (I frame) or an instantaneous decoding refresh (IDR) frame in H.264). The PSS server can also attach the payload bytes to the last RTP packet in each group of packets that make up another type of frame (for example, a predicted frame (P frame)).
[0059] An intra frame (I frame) is a frame in a compressed video stream that is coded without reference to neighboring frames. An IDR access unit can contain an intra picture, that is, a coded picture that can be decoded without decoding any previous pictures in the network abstraction layer (NAL) unit stream, and the presence of an IDR access unit can indicate that no subsequent picture in the stream will require reference to pictures prior to the intra picture that the IDR access unit contains in order to be decoded. H.264/moving picture experts group-4 (MPEG-4) part 10, or advanced video coding (AVC), is a video compression format that can be used for recording, compressing and distributing high-definition video. A P frame can be a type of inter frame that defines forward-predicted pictures. The prediction can be made from an earlier picture, mainly an I frame, so that less data is coded.
[0060] If the CVO RTP header extension is the only header extension present, a total of 8 bytes can be appended to the RTP header, and the last packet in the sequence of RTP packets can be marked with both a marker bit and an extension bit.
[0061] If the CVO information is signaled in the RTP header extension, the PSS server can signal the CVO information in the SDP by including an a=extmap attribute that indicates the CVO uniform resource name (URN) under the relevant media line scope. The CVO URN can be represented as: "urn:3gpp:video-orientation". An example of using the URN to signal CVO in relation to a media line is as follows: "a=extmap:7 urn:3gpp:video-orientation". The number 7 in the CVO URN example can be replaced with another identifier value.
[0062] If Higher Granularity CVO information is signaled in the RTP header extension, the PSS server can signal the Higher Granularity CVO information in the SDP in a similar way to the CVO URN, where the Higher Granularity CVO URN can be represented as: "urn:3gpp:video-orientation:6". An example of using the URN to signal Higher Granularity CVO in relation to a media line is as follows: "a=extmap:5 urn:3gpp:video-orientation:6".
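The following sketch illustrates one way a receiver might interpret this signaling: it reads the a=extmap line from the SDP to learn which RTP header extension identifier carries CVO, and then decodes a CVO byte carried in the header extension. The bit layout used here (rotation in the least significant bits followed by a flip bit, with 2 rotation bits for "urn:3gpp:video-orientation" and 6 for the Higher Granularity URN) follows the LSB-to-MSB compensation order described above, but it is an assumption for illustration; the exact field positions should be taken from the relevant 3GPP specification.

# Illustrative receiver-side handling of the CVO SDP signaling and a CVO byte.
# The exact bit layout is an assumption for illustration; consult the 3GPP spec.
import re

def parse_cvo_extmap(sdp_text: str):
    """Return (extension_id, rotation_bits) for the CVO extension, if signaled."""
    match = re.search(r"a=extmap:(\d+)\s+urn:3gpp:video-orientation(:6)?", sdp_text)
    if match is None:
        return None
    rotation_bits = 6 if match.group(2) else 2          # Higher Granularity uses 6 bits
    return int(match.group(1)), rotation_bits

def decode_cvo_byte(cvo_byte: int, rotation_bits: int):
    """Split a CVO byte into rotation degrees and a flip flag (assumed layout)."""
    rotation_steps = cvo_byte & ((1 << rotation_bits) - 1)   # rotation in the LSBs
    flip = bool((cvo_byte >> rotation_bits) & 0x1)           # flip bit just above it
    step_degrees = 360 / (1 << rotation_bits)                # 90-degree or 5.625-degree steps
    return rotation_steps * step_degrees, flip

sdp = "m=video 49170 RTP/AVP 98\r\na=extmap:7 urn:3gpp:video-orientation\r\n"
extension_id, bits = parse_cvo_extmap(sdp)
print(extension_id, decode_cvo_byte(0b00000101, bits))  # 7 (90.0, True)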
[0063] Accordingly, in one example, CVO information can be recorded inside a 3GP file format as a continuous recording of a series of orientation values of the captured images. A box can be defined in the ISO-based media file format (ISO-BMFF) or in the 3GPP file format for timed CVO information. For example, the video track or the RTP hint track of the 3GPP file format (3GP) can be used to embed the orientation information. For DASH-formatted content, the CVO information can be carried inside a file-level ISO-BMFF box, such as in the initialization segment (for example, in an ISO-BMFF "moov" box) or in the media segments (for example, in an ISO-BMFF "moof" box), as shown in Figure 4. In another example, the timed metadata track of the ISO-BMFF can be chosen as the track to contain the CVO information. For example, a new box can be created specifically for CVO, such as a CVOSampleEntry with a description of the CVO parameters. Other boxes within the sample description box 350 in the ISO file format box structure hierarchy, shown in Figure 5, can also be used to contain the CVO information. The ISO file format box structure hierarchy can include a movie box 340, a track box 342, a media box 344, a media information box 346, a sample table box 348 and the sample description box, with higher-order boxes listed first. The sample description box can have one or more sample entries, such as MP4VisualSampleEntry, AVCSampleEntry, HintSampleEntry or CVOSampleEntry.
[0064] In another example, the video orientation (for example, CVO) information can be included in a 3GP file format, which can be included in a 3GPP technical specification (TS).
[0065] In one example, the box type of the CVOSampleEntry box can have a codec attribute set to "3gvo". The CVOSampleEntry box can be defined as follows:
[0066] CVOSampleEntry ::= BoxHeader
[0067] Reserved_6
[0068] Data-reference-index
[0069] Granularity
[0070] The CVOSampleEntry box fields can be defined by Table 1. The CVOSampleEntry box fields can include a name, a type, details and a field value.
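As an illustration of the box structure above, the sketch below serializes and parses a minimal CVOSampleEntry-like box. The byte layout (a standard ISO-BMFF box header, six reserved bytes, a 16-bit data reference index and a one-byte granularity field) is inferred from the field list above and from common ISO-BMFF sample entry conventions; it is an assumption for illustration rather than a normative definition.

# Illustrative (assumed) layout for a CVOSampleEntry-like ISO-BMFF box.
import struct

def build_cvo_sample_entry(granularity: int, data_reference_index: int = 1) -> bytes:
    body = b"\x00" * 6                                   # six reserved bytes
    body += struct.pack(">H", data_reference_index)      # data reference index
    body += struct.pack(">B", granularity)               # CVO granularity in bits (2 or 6)
    size = 8 + len(body)                                  # box header is size + type
    return struct.pack(">I", size) + b"3gvo" + body

def parse_cvo_sample_entry(box: bytes):
    size, box_type = struct.unpack(">I4s", box[:8])
    if box_type != b"3gvo":
        raise ValueError("not a CVOSampleEntry box")
    data_reference_index, granularity = struct.unpack(">HB", box[14:17])
    return {"size": size, "data_reference_index": data_reference_index,
            "granularity": granularity}

entry = build_cvo_sample_entry(granularity=6)
print(parse_cvo_sample_entry(entry))  # {'size': 17, 'data_reference_index': 1, 'granularity': 6}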
[0071] The technology described provides streaming or download of content with oriented video components (for example, CVO components). Device orientation-aware multimedia adaptation provides streaming or download of previously captured and uploaded content with oriented video components. For example, oriented video components can be delivered as part of a PSS download or an MBMS download application.
[0072] For example, if the server determines that the content was captured by an orientation-aware terminal (for example, by inspecting the 3GP-based content file), while the receiving client device is not orientation-aware (for example, as known from the PSS device capability signaling mechanisms), the server can process (for example, apply transcoding to) the content to correct and avoid misalignment problems during later rendering and display. If the receiving client device is orientation-aware, the server may not need to correct the misalignment, and may instead choose to send the content "as is" (that is, without modification) together with the video orientation information embedded in the content (for example, in an RTP extension header for RTP streaming, or inside a 3GP file for HTTP streaming and DASH), so that the receiving client device can correct the misalignment.
[0073] Figure 6 illustrates example server-client interaction that performs orientation-aware media delivery based on receiving content with embedded CVO information. For example, a capture client 224 can capture the CVO information along with the content and upload the content (for example, a 3GP file) with embedded video orientation information 282 to the server 232. In another example, a rendering client 226 can signal a device profile to the server with CVO capability information 280. The server performs orientation-aware content adaptation, selection, transcoding and/or format conversion to correct potential misalignment and optimize content delivery with respect to (w.r.t.) the CVO capabilities of the rendering client 284, as previously described. The server can then deliver the adapted content to the rendering client as part of a streaming or download service 286.
[0074] In another example, consistent with the server-client interaction illustrated in Figure 6, a DASH server can learn from the client (for example, the rendering client 226) that the client has the capability to process CVO information and correct misalignment. The server can then indicate in the MPD the presence of CVO information in the DASH representations (that is, if the DASH segments are stored content, the DASH server may need to detect the presence of CVO information beforehand; the DASH server can determine this by parsing the 3GP files that correspond to the DASH segments and checking whether video orientation information is indicated in the metadata track). Upon receiving the MPD, the DASH client can then activate a video orientation engine to process the CVO information signaled in the DASH segments (that is, by parsing the 3GP files that correspond to the DASH representations), correct any misalignment, and render/display the video with correct alignment.
[0075] Also consistent with the server-client interaction illustrated in Figure 6, a DASH server can learn from the client that the client does not have the capability to process CVO information and correct misalignment. The server can detect the presence of CVO information in the content, since the MPD can indicate the CVO information. In response, the DASH server can process the 3GP files that correspond to the DASH representations offered in the MPD in order to correct any misalignment, and can send the requested content to the DASH client after this processing.
[0076] In another configuration, an indication of CVO information can be embedded in a DASH MPD based on TS 26.247. For example, a CVO indication attribute (for example, cvo_granularity) can be included in the MPD, where the MPD can have common attributes and elements. The AdaptationSet, Representation and SubRepresentation elements can have assigned common attributes and elements, such as the CVO indication attribute.
[0077] The extensible markup language (XML) syntax for the CVO indication attribute (for example, cvo_granularity) can be as shown in Table 3, illustrated in Figure 7.
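To make the MPD-level indication concrete, the sketch below parses a small MPD fragment and looks for a CVO indication on each Representation, either through a cvo_granularity attribute or through a "3gvo" entry in the codecs attribute. The MPD fragment, the attribute placement and the namespace handling are simplified assumptions for illustration; the normative syntax is the one shown in Table 3 (Figure 7) and in TS 26.247.

# Illustrative parsing of a simplified MPD fragment for a CVO indication.
import xml.etree.ElementTree as ET

MPD_FRAGMENT = """
<MPD xmlns="urn:mpeg:dash:schema:mpd:2011">
  <Period>
    <AdaptationSet mimeType="video/mp4">
      <Representation id="video-2mbps" bandwidth="2000000"
                      codecs="avc1.42E01E,3gvo" cvo_granularity="2"/>
      <Representation id="video-500kbps" bandwidth="500000"
                      codecs="avc1.42E01E"/>
    </AdaptationSet>
  </Period>
</MPD>
"""

NS = {"mpd": "urn:mpeg:dash:schema:mpd:2011"}

def representations_with_cvo(mpd_xml: str):
    root = ET.fromstring(mpd_xml)
    result = {}
    for representation in root.findall(".//mpd:Representation", NS):
        has_cvo = ("cvo_granularity" in representation.attrib
                   or "3gvo" in representation.get("codecs", ""))
        result[representation.get("id")] = has_cvo
    return result

print(representations_with_cvo(MPD_FRAGMENT))
# {'video-2mbps': True, 'video-500kbps': False}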
[0078] As previously discussed, DASH is a standardized HTTP streaming protocol. As illustrated in Figure 8, DASH can specify different formats for the media presentation description (MPD) metadata file 402, which provides information on the structure and the different versions of the media content representations stored on the server, as well as on the segment formats. The MPD metadata file contains information on the initialization and media segments for a media player (for example, the media player can look at the initialization segment to determine a container format and media timing information), to ensure the mapping of segments into a media presentation timeline for switching and synchronous presentation with other representations. DASH technology has also been standardized by other organizations, such as the moving picture experts group (MPEG), the open IPTV forum (OIPF) and hybrid broadcast broadband TV (HbbTV).
[0079] A DASH client can receive multimedia content by downloading the segments through a series of HTTP request-response transactions. DASH can provide the ability to dynamically switch between different bit rate representations of the media content while the bandwidth available to a mobile device changes. Thus, DASH can allow fast adaptation to changing network and wireless link conditions, user preferences and device capabilities, such as display resolution, the type of central processing unit (CPU) employed, the memory resources available, and so on. The dynamic adaptation of DASH can provide a better quality of experience (QoE) for a user, with shorter startup delays and fewer rebuffering events than other streaming protocols.
[0080] In DASH, the media presentation description (MPD) metadata file 402 can provide information on the structure and the different versions of the media content representations stored in the web/media server. In the example illustrated in Figure 8, the MPD metadata file is temporally divided into periods 404 that have a predetermined length, such as 60 seconds in this example. Each period can include a plurality of adaptation sets 406. Each adaptation set can provide information about one or more media components with a number of encoded alternatives. For example, adaptation set 0 in this example can include a variety of differently encoded audio alternatives, such as different bit rates, mono, stereo, surround sound, CVO, and so on. In addition to offering different quality audio for a multimedia presentation over the period ID, the adaptation set can also include audio in different languages. The different alternatives offered in the adaptation set are called representations 408.
[0081] In Figure 8, adaptation set 1 is illustrated as offering video at different bit rates, such as 5 megabits per second (Mbps), 2 Mbps or 500 kilobits per second (kbps), or with an additional functionality that can be used for seeking and advancing within the content.
[0082] The multimedia in the adaptation set can be further divided into smaller segments. In the example of Figure 8, the 60-second video segment of adaptation set 1 is further divided into four sub-segments 414 of 15 seconds each. These examples are not intended to be limiting. The actual length of the adaptation set and of each media segment or sub-segment depends on the type of media, the system requirements, the potential types of interference, and so on. The actual media segments or sub-segments can range from less than one second to several minutes in length.
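The sketch below shows, under simplified assumptions, the kind of rate adaptation loop described in the preceding paragraphs: for each segment the client measures recent throughput and picks the highest-bandwidth representation that fits. The representation names, segment URL template and safety margin are illustrative inventions, not values from the specification.

# Simplified DASH-style rate adaptation loop (illustrative assumptions only).
representations = [                      # (id, bandwidth in bits per second)
    ("video-5mbps", 5_000_000),
    ("video-2mbps", 2_000_000),
    ("video-500kbps", 500_000),
]

def pick_representation(measured_throughput_bps: float, safety_margin: float = 0.8):
    """Pick the highest-bandwidth representation that fits the measured throughput."""
    budget = measured_throughput_bps * safety_margin
    for rep_id, bandwidth in sorted(representations, key=lambda r: -r[1]):
        if bandwidth <= budget:
            return rep_id
    return representations[-1][0]        # fall back to the lowest bit rate

# One request per 15-second sub-segment, as in the Figure 8 example.
throughput_samples_bps = [6_000_000, 2_400_000, 900_000, 3_000_000]
for segment_number, throughput in enumerate(throughput_samples_bps, start=1):
    rep_id = pick_representation(throughput)
    segment_url = f"http://media.example.com/{rep_id}/segment-{segment_number}.m4s"
    print(segment_url)                   # the client would fetch this URL over HTTP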
[0083] Another example provides a method 500 for signaling the video orientation coordination (CVO) capability of a mobile terminal (MT) at a server, as shown in the flowchart of Figure 9. The method can be executed as instructions on a machine or on computer circuitry, where the instructions are included in at least one computer-readable medium or in a non-transitory machine-readable storage medium. The method includes the operation of receiving, at the server, a device capability for a client, as in block 510. The next operation of the method can be identifying when the device capability includes a CVO attribute. The method can also include adapting content that is streamed to the client based on the inclusion of the CVO attribute in the device capability.
[0084] In one example, the operation of adapting the streamed content can further include: modifying a display orientation of a hypertext transfer protocol (HTTP) stream, a dynamic adaptive streaming over HTTP (DASH) stream, or a real-time transport protocol (RTP) stream to correct misalignment when the device capability for the client does not include the CVO attribute, indicating that the client is not an orientation-aware terminal; or embedding a CVO indication attribute in a media presentation description (MPD) metadata file or in a session description protocol (SDP) file when the device capability for the client includes the CVO attribute, indicating that the client is an orientation-aware terminal, to modify the display orientation of a stream.
[0085] In another example, the method can further include delivering a media presentation description (MPD) metadata file for dynamic adaptive streaming over hypertext transfer protocol (HTTP) (DASH) content to the client with a CVO indication attribute provided in a codecs common attribute or element for an AdaptationSet, Representation or SubRepresentation. The codecs attribute can be set to "3gvo" in order to indicate the presence of an oriented video component and of associated CVO information in the streamed DASH content. The method can further include delivering a session description protocol (SDP) file for real-time transport protocol (RTP) streaming to the client with a CVO indication via an a=extmap attribute with a CVO uniform resource name (URN) "urn:3gpp:video-orientation", which represents a 2-bit granularity for CVO, or "urn:3gpp:video-orientation:6", which represents a 6-bit granularity for Higher Granularity CVO, for the CVO information contained in an RTP extension header.
[0086] In another configuration, the operation of adapting the streamed content can further include receiving a user-generated content (UGC) video that includes embedded CVO information for the UGC video in a third generation partnership project (3GPP) file format (3GP) file.
[0087] In another example, the operation of adapting the streamed content can further include storing the CVO data in a third generation partnership project (3GPP) file format (3GP) file using a CVOSampleEntry for a timed CVO metadata track in a sample description box of the International Organization for Standardization (ISO) file format box structure. The CVOSampleEntry fields can include a BoxHeader size or type, a Data reference index or a Granularity, where the BoxHeader type can be set to a value of "3gvo".
[0088] In another configuration, the operation of receiving the device capability for the client can further include exchanging a packet-switched streaming service (PSS) client capability for the client. The CVO capability can include: a StreamingCVOCapable attribute to indicate whether the client is a CVO-capable receiver of real-time transport protocol (RTP) streams, a StreamingHighGranularityCVOCapable attribute to indicate whether the client is a Higher Granularity CVO-capable receiver of RTP streams, a ThreeGPCVOCapable attribute to indicate whether the client is a CVO-capable receiver of third generation partnership project (3GPP) file format (3GP) files, or a ThreeGPHighGranularityCVOCapable attribute to indicate whether the client is a Higher Granularity CVO-capable receiver of 3GP files.
[0089] Another example provides functionality 600 of computer circuitry of a server operable to provide streaming with video orientation coordination (CVO), as shown in the flowchart of Figure 10. The functionality can be implemented as a method, or the functionality can be executed as instructions on a machine, where the instructions are included in at least one computer-readable medium or in a non-transitory machine-readable storage medium. The computer circuitry can be configured to receive a device capability for a client, as in block 610. The computer circuitry can further be configured to modify content streamed to the client based on an inclusion of a CVO attribute in the device capability, as in block 620.
[0090] In one example, the computer circuitry configured to modify the streamed content can further be configured to correct a rendering orientation of a real-time transport protocol (RTP) stream or of dynamic adaptive streaming over hypertext transfer protocol (HTTP) (DASH) content for misalignment prior to delivery to the client when the device capability for the client does not include the CVO attribute, indicating that the client is not an orientation-aware terminal.
[0091] In another example, the computer circuitry configured to modify the streamed content can further be configured to deliver a media presentation description (MPD) metadata file for dynamic adaptive streaming over hypertext transfer protocol (HTTP) (DASH) content to the client with a CVO indication attribute provided in a codecs common attribute or element for an AdaptationSet, Representation or SubRepresentation, where the codecs attribute is set to "3gvo" in order to indicate the presence of an oriented video component and of associated CVO information in the streamed DASH content. Or, the computer circuitry configured to modify the streamed content can further be configured to deliver a session description protocol (SDP) file for real-time transport protocol (RTP) streaming to the client with a CVO indication via an a=extmap attribute with a CVO uniform resource name (URN) "urn:3gpp:video-orientation", which represents a 2-bit granularity for CVO, or "urn:3gpp:video-orientation:6", which represents a 6-bit granularity for Higher Granularity CVO, for the CVO information contained in an RTP extension header.
[0092] In another configuration, the computer circuitry can further be configured to store CVO data in a third generation partnership project (3GPP) file format (3GP) file using a CVOSampleEntry for a timed CVO metadata track in a sample description box of a box structure in the International Organization for Standardization (ISO) file format. The CVOSampleEntry fields can include a BoxHeader size or type, a Data reference index or a Granularity, where the BoxHeader type can be set to a value of "3gvo". Or, the computer circuitry can further be configured to store CVO data for real-time transport protocol (RTP) streaming in an RTP extension header.
[0093] In another example, the computer circuitry configured to receive the device capability can further be configured to exchange a packet-switched streaming service (PSS) capability. The CVO attribute can include: a StreamingCVOCapable attribute to indicate whether the client is a CVO-capable receiver of real-time transport protocol (RTP) streams, a StreamingHighGranularityCVOCapable attribute to indicate whether the client is a Higher Granularity CVO-capable receiver of RTP streams, a ThreeGPCVOCapable attribute to indicate whether the client is a CVO-capable receiver of third generation partnership project (3GPP) file format (3GP) files, or a ThreeGPHighGranularityCVOCapable attribute to indicate whether the client is a Higher Granularity CVO-capable receiver of 3GP files.
[0094] In another configuration, the computer circuitry configured to modify the streamed content can further be configured to perform orientation-aware content adaptation, orientation-aware content selection, orientation-aware transcoding or orientation-aware format conversion to correct video orientation misalignment and ensure content playback at the client with correct video orientation. The server can include a third generation partnership project (3GPP) long term evolution (LTE) packet-switched streaming service (PSS) server, a dynamic adaptive streaming over hypertext transfer protocol (HTTP) (DASH) server, or an integrated multimedia subsystem (IMS)-based PSS and multimedia broadcast and multicast service (MBMS) (IMS PSS MBMS) server.
[0095] Figure 11 illustrates an example mobile terminal (MT) (for example, a UE or client device) 720 to provide video orientation coordination (CVO) capability, a node 710 and a server.
[0096] Referring again to Figure 11, the mobile terminal 720 can include a processor 722, a transceiver 724 and an orientation sensor 726. The processor can be configured to determine an MT CVO capability. The transceiver can be configured to transmit the MT CVO capability in a streaming component attribute to the server.
[0097] In one example, the transceiver 724 can further be configured to receive a real-time transport protocol (RTP) extension header for an RTP stream, or a third generation partnership project (3GPP) file format (3GP) file for a hypertext transfer protocol (HTTP) stream or dynamic adaptive streaming over HTTP (DASH). The processor 722 can further be configured to: parse a media presentation description (MPD) metadata file for the 3GP file for a CVO indication attribute, or parse the 3GP file for the embedded CVO information; determine an orientation correction term based on the parsed CVO information and a current MT orientation; and correct a rendering orientation of the HTTP stream or DASH for misalignment based on the determined orientation correction term when the MPD metadata file includes the CVO indication attribute and the MT is an orientation-aware terminal. Or, the processor can be configured to: parse a session description protocol (SDP) file for the RTP stream for the CVO indication attribute, or parse the RTP extension header of the RTP stream for the embedded CVO information; determine an orientation correction term based on the parsed CVO information and the current orientation of the client device; and correct a rendering orientation of the RTP stream for misalignment based on the determined orientation correction term when the SDP file includes the CVO indication attribute and the MT is an orientation-aware terminal. Correcting the rendering orientation can compensate for the rotation or flip of an orientation.
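The sketch below illustrates the kind of orientation correction term described above: given the rotation and flip recovered from the CVO information and the terminal's current display orientation, it computes the rotation the renderer should apply, compensating rotation first and flip afterwards, in line with the LSB-to-MSB order mentioned earlier. The function signature and the sign conventions are assumptions for illustration.

# Illustrative orientation correction at a CVO-capable rendering client.
# Sign conventions and the API are assumptions; only the idea is taken from the text.
def orientation_correction(content_rotation_deg: float,
                           content_flip: bool,
                           device_orientation_deg: float):
    """Return (rotation_to_apply_deg, apply_horizontal_flip) for the renderer.

    content_rotation_deg   -- rotation signaled in the CVO information
    content_flip           -- flip flag signaled in the CVO information
    device_orientation_deg -- current orientation of the terminal's display
    """
    # Rotation compensation first (the correction term), then flip compensation.
    correction = (content_rotation_deg - device_orientation_deg) % 360
    return correction, content_flip

# Content captured at 90 degrees with no flip, shown on a device held at 0 degrees:
print(orientation_correction(90, False, 0))    # (90, False)
# The same content on a device already rotated to 90 degrees needs no extra rotation:
print(orientation_correction(90, False, 90))   # (0, False)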
[00100] In another example, the transceiver 724 can further be configured to exchange a packet-switched streaming service (PSS) client capability for the MT CVO capability.
[00101] In another configuration, the processor 722 can further be configured to: capture a user-generated content (UGC) video with a specified orientation; and embed the CVO information for the UGC video in a third generation partnership project (3GPP) file format (3GP) file. The transceiver 724 can further be configured to upload the 3GP file for a hypertext transfer protocol (HTTP) stream or dynamic adaptive streaming over HTTP (DASH).
[00102] In another example, the processor 722 can further be configured to: capture CVO data with a specified orientation; and store the CVO data in a third generation partnership project (3GPP) file format (3GP) file using a CVOSampleEntry for a timed CVO metadata track in a sample description box of a box structure in the International Organization for Standardization (ISO) file format. The CVOSampleEntry fields can include a BoxHeader size or type, a Data reference index or a Granularity, where the BoxHeader type can be set to a value of "3gvo". The transceiver 724 can further be configured to upload the 3GP file for a hypertext transfer protocol (HTTP) stream or dynamic adaptive streaming over HTTP (DASH).
[00103] In another configuration, the MT CVO capability can be provided in a third generation partnership project (3GPP) long term evolution (LTE) packet-switched streaming service (PSS) session, a dynamic adaptive streaming over hypertext transfer protocol (HTTP) (DASH) session, or an integrated multimedia subsystem (IMS)-based PSS and multimedia broadcast and multicast service (MBMS) (IMS PSS MBMS) session. The mobile terminal can include the orientation sensor 726 to determine an orientation of the MT.
[00104] Figure 12 provides an example illustration of the mobile terminal (MT), such as a client device, a mobile node, a user equipment (UE), a mobile station (MS), a mobile wireless device, a mobile communication device, a tablet computer, a handset or another type of wireless device. The wireless device can include one or more antennas configured to communicate with a node, a macro node, a low power node (LPN) or a transmission station, such as a base station (BS), an evolved Node B (eNB), a baseband unit (BBU), a remote radio head (RRH), a remote radio equipment (RRE), a relay station (RS), a radio equipment (RE), a remote radio unit (RRU), a central processing module (CPM) or another type of wireless wide area network (WWAN) access point. The wireless device can be configured to communicate using at least one wireless communication standard, including 3GPP LTE, WiMAX, high speed packet access (HSPA), Bluetooth and WiFi. The wireless device can communicate using separate antennas for each wireless communication standard or shared antennas for multiple wireless communication standards. The wireless device can communicate over a wireless local area network (WLAN), a wireless personal area network (WPAN) and/or a WWAN.
[00105] Figure 12 also provides an illustration of a microphone and of one or more speakers that can be used for audio input and output from the wireless device. The display screen can be a liquid crystal display (LCD) screen or another type of display screen, such as an organic light emitting diode (OLED) display. The display screen can be configured as a touch screen. The touch screen can use capacitive, resistive or another type of touch screen technology. An application processor and a graphics processor can be coupled to internal memory to provide processing and display capabilities. A non-volatile memory port can also be used to provide data input/output options to a user. The non-volatile memory port can also be used to expand the memory capabilities of the wireless device. A keyboard can be integrated with the wireless device or wirelessly connected to the wireless device to provide additional user input. A virtual keyboard can also be provided using the touch screen.
[00106] Various techniques, or certain aspects or portions thereof, can take the form of program code (that is, instructions) embodied in tangible media, such as floppy disks, compact disc read-only memories (CD-ROMs), non-transitory computer-readable storage media or any other machine-readable storage medium in which, when the program code is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the various techniques. The circuitry can include hardware, firmware, program code, executable code, computer instructions and/or software. A non-transitory computer-readable storage medium can be a computer-readable storage medium that does not include a signal. In the case of program code executing on programmable computers, the computing device can include a processor, a storage medium readable by the processor (including volatile and non-volatile memory and/or storage elements), at least one input device and at least one output device. The volatile and non-volatile memory and/or storage elements can be a random access memory (RAM), an erasable programmable read-only memory (EPROM), a flash drive, an optical drive, a magnetic hard drive, a solid state drive or another medium for storing electronic data. The node and the wireless device can also include a transceiver module (that is, a transceiver), a counter module (that is, a counter), a processing module (that is, a processor) and/or a clock module (that is, a clock) or timer module (that is, a timer). One or more programs that can implement or use the various techniques described in this document can use an application programming interface (API), reusable controls and the like. Such programs can be implemented in a high-level procedural or object-oriented programming language to communicate with a computer system. However, the program(s) can be implemented in assembly or machine language, if desired. In any case, the language can be a compiled or interpreted language and combined with hardware implementations.
[00107] It should be understood that many of the functional units described in this specification have been labeled as modules in order to more particularly emphasize their implementation independence. For example, a module can be implemented as a hardware circuit comprising custom very large scale integration (VLSI) circuits or gate arrays, off-the-shelf semiconductors such as logic chips, transistors or other discrete components. A module can also be implemented in programmable hardware devices, such as field programmable gate arrays, programmable array logic, programmable logic devices or the like.
[00108] Modules can also be implemented in software for execution by various types of processors. An identified module of executable code can, for example, comprise one or more physical or logical blocks of computer instructions, which can, for example, be organized as an object, a procedure or a function. Nevertheless, the executables of an identified module need not be physically located together, but can comprise disparate instructions stored in different locations which, when joined logically together, comprise the module and achieve the stated purpose for the module.
[00109] A module of executable code can be a single instruction, or many instructions, and can even be distributed over several different code segments, among different programs and across several memory devices. Similarly, operational data can be identified and illustrated in this document within modules, and can be embodied in any suitable form and organized within any suitable type of data structure. The operational data can be collected as a single data set, or can be distributed over different locations, including over different storage devices, and can exist, at least partially, merely as electronic signals on a system or network. The modules can be passive or active, including agents operable to perform desired functions.
[00110] Reference throughout this specification to "an example" or "example" means that a particular feature, structure or characteristic described in connection with the example is included in at least one embodiment of the present invention. Thus, the appearances of the phrase "in an example" or of the word "example" in various places throughout this specification do not necessarily all refer to the same embodiment.
[00111] As used in this document, a plurality of items, structural elements, compositional elements and/or materials can be presented in a common list for convenience. However, these lists should be construed as though each member of the list is individually identified as a separate and unique member. Thus, no individual member of such a list should be construed as a de facto equivalent of any other member of the same list solely based on its presentation in a common group without indications to the contrary. In addition, various embodiments and examples of the present invention can be referred to in this document along with alternatives for the various components thereof. It is understood that such embodiments, examples and alternatives are not to be construed as de facto equivalents of one another, but are to be considered as separate and autonomous representations of the present invention.
[00112] Furthermore, the described features, structures or characteristics can be combined in any suitable manner in one or more embodiments. In the description that follows, numerous specific details are provided, such as examples of templates, distances, network examples, etc., to provide a thorough understanding of embodiments of the invention. One skilled in the relevant art will recognize, however, that the invention can be practiced without one or more of the specific details, or with other methods, components, templates, etc. In other instances, well-known structures, materials or operations are not shown or described in detail to avoid obscuring aspects of the invention.
[00113] While the foregoing examples are illustrative of the principles of the present invention in one or more particular applications, it will be apparent to those of ordinary skill in the art that numerous modifications in form, usage and details of implementation can be made without the exercise of inventive faculty and without departing from the principles and concepts of the invention. Accordingly, it is not intended that the invention be limited, except as by the claims set forth below.
Claims:
Claims (24)
[1] 1. Server operable to provide streaming with video orientation coordination (CVO), characterized by having computer circuitry configured to: receive a device capability exchange message with a CVO attribute for a client; and modify content on the server to be streamed to the client based on the CVO attribute received in the device capability.
[2] 2. Computer circuitry, according to claim 1, characterized by the fact that the computer circuitry configured to modify the streamed content is further configured to: correct a rendering orientation of a real-time transport protocol (RTP) stream or of dynamic adaptive streaming over hypertext transfer protocol (HTTP) (DASH) content for misalignment prior to delivery to the client when the device capability for the client does not include the CVO attribute, indicating that the client is not an orientation-aware terminal.
[3] 3. Computer circuitry, according to claim 1, characterized by the fact that the computer circuitry configured to modify the streamed content is further configured to: deliver a media presentation description (MPD) metadata file for dynamic adaptive streaming over hypertext transfer protocol (HTTP) (DASH) content from the server to the client with a CVO indication attribute provided in a codecs common attribute or element for an AdaptationSet, Representation or SubRepresentation, where the codecs attribute is set to "3gvo" in order to indicate the presence of an oriented video component and of associated CVO information in the streamed DASH content; and deliver a session description protocol (SDP) file for real-time transport protocol (RTP) streaming to the client with a CVO indication via an a=extmap attribute with a CVO uniform resource name (URN) "urn:3gpp:video-orientation", which represents a 2-bit granularity for CVO, or "urn:3gpp:video-orientation:6", which represents a 6-bit granularity for Higher Granularity CVO, for the CVO information contained in an RTP extension header.
[4] 4. Computer circuitry, according to claim 1, characterized by the fact that the computer circuitry is configured to: store CVO data in a third generation partnership project (3GPP) file format (3GP) file using a CVOSampleEntry for a timed CVO metadata track in a sample description box of a box structure in the International Organization for Standardization (ISO) file format, where the CVOSampleEntry fields include a BoxHeader size or type, a Data reference index or a Granularity, and where the BoxHeader type is set to a value of "3gvo"; or store CVO data for real-time transport protocol (RTP) streaming in an RTP extension header.
[5] 5. Computer circuitry, according to claim 1, characterized by the fact that the computer circuitry configured to receive the device capability is further configured to: exchange a packet-switched streaming service (PSS) capability, where the CVO attribute includes: a StreamingCVOCapable attribute to indicate whether the client is a CVO-capable receiver of real-time transport protocol (RTP) streams, a StreamingHighGranularityCVOCapable attribute to indicate whether the client is a Higher Granularity CVO-capable receiver of RTP streams, a ThreeGPCVOCapable attribute to indicate whether the client is a CVO-capable receiver of third generation partnership project (3GPP) file format (3GP) files, or a ThreeGPHighGranularityCVOCapable attribute to indicate whether the client is a Higher Granularity CVO-capable receiver of 3GP files.
[6] 6. Computer circuitry, according to claim 1, characterized by the fact that the computer circuitry configured to modify the streamed content is further configured to: perform orientation-aware content adaptation, orientation-aware content selection, orientation-aware transcoding or orientation-aware format conversion to correct video orientation misalignment and ensure content playback at the client with correct video orientation.
[7] 7. Computer circuitry, according to claim 1, characterized by the fact that the server includes a third generation partnership project (3GPP) long term evolution (LTE) packet-switched streaming service (PSS) server, a dynamic adaptive streaming over hypertext transfer protocol (HTTP) (DASH) server, or an integrated multimedia subsystem (IMS)-based PSS and multimedia broadcast and multicast service (MBMS) (IMS PSS MBMS) server.
[8] 8. Mobile terminal (MT) to provide video orientation coordination (CVO) capability, characterized by comprising: a processor to determine an MT CVO capability; and a transceiver to transmit the MT CVO capability in a streaming component attribute to a server.
[9] 9. Mobile terminal according to claim 8, characterized by the fact that: the transceiver is additionally configured to: receive a real-time transport protocol (RTP) extension header for an RTP stream, or a third generation partnership project (3GPP) file format (3GP) file for hypertext transfer protocol (HTTP) streaming or dynamic adaptive streaming over HTTP (DASH); and the processor is additionally configured to: parse a media presentation description (MPD) metadata file for the 3GP file for a CVO indication attribute or parse the 3GP file for embedded CVO information, determine an orientation correction term based on the parsed CVO information and a current MT orientation, and correct a rendering orientation of the HTTP stream or DASH content for misalignment based on the determined orientation correction term when the MPD metadata file includes the CVO indication attribute and the MT is an orientation-aware terminal; or parse a session description protocol (SDP) file for the RTP stream for the CVO indication attribute, or parse the RTP extension header of the RTP stream for embedded CVO information, determine an orientation correction term based on the parsed CVO information and the current MT orientation, and correct a rendering orientation of the RTP stream for misalignment based on the determined orientation correction term when the SDP file includes the CVO indication attribute and the MT is an orientation-aware terminal, where correcting the rendering orientation compensates for a rotation or flip of an orientation.

[10] 10. Mobile terminal according to claim 8, characterized by the fact that: the transceiver is additionally configured to: receive a media presentation description (MPD) metadata file from the server for dynamic adaptive streaming over hypertext transfer protocol (HTTP) (DASH) content with a CVO indication attribute provided in a common codecs attribute or element for an AdaptationSet, Representation or SubRepresentation, where the codecs attribute is set to "3gvo" in order to indicate the presence of an oriented video component and associated CVO information in the streamed DASH content; and the processor is additionally configured to: parse the MPD metadata file for the CVO indication attribute; and modify a rendering orientation of an HTTP stream or of streamed DASH content for misalignment when the MPD metadata file includes the CVO indication attribute and the MT is an orientation-aware terminal.
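To make the correction step of claim 9 concrete, the sketch below decodes a CVO byte carried in an RTP header extension and derives an orientation correction from the captured rotation and the terminal's current display rotation. The assumed bit layout (rotation in the low bits, flip and camera flags above it) follows the common reading of the 3GPP CVO byte, but it is stated here only as an assumption of the example, as are the function names.

```python
# Hedged sketch of the orientation-correction step in claim 9.
# Assumed CVO byte layout: low bits = rotation, next bit = flip, next bit = camera.

def parse_cvo_byte(byte: int, high_granularity: bool = False) -> dict:
    """Decode rotation (degrees), flip and camera flags from one CVO byte."""
    if high_granularity:                 # urn:3gpp:video-orientation:6
        rotation_steps, step = byte & 0x3F, 360 / 64   # 6-bit rotation
        flip, camera = bool(byte & 0x40), bool(byte & 0x80)
    else:                                # urn:3gpp:video-orientation
        rotation_steps, step = byte & 0x03, 90         # 2-bit rotation (90-degree steps)
        flip, camera = bool(byte & 0x04), bool(byte & 0x08)
    return {"rotation": rotation_steps * step, "flip": flip, "camera": camera}

def orientation_correction(capture_rotation: float, current_mt_rotation: float) -> float:
    """Correction term applied at render time so the video is displayed upright."""
    return (capture_rotation - current_mt_rotation) % 360

cvo = parse_cvo_byte(0x02)                              # 2-bit granularity: 180-degree capture rotation
print(cvo)                                              # {'rotation': 180, 'flip': False, 'camera': False}
print(orientation_correction(cvo["rotation"], 90.0))    # 90.0
```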
[11] 11. Mobile terminal according to claim 8, characterized by the fact that: the transceiver is additionally configured to: receive a third generation partnership project (3GPP) file format (3GP) file that includes a CVOSampleEntry for a timed sequence of CVO samples in a sample description box of an International Organization for Standardization (ISO) file format box structure, where the CVOSampleEntry fields include a BoxHeader size or type, a Data Reference Index, or a Granularity, and where the BoxHeader type is set to a value of "3gvo"; and the processor is additionally configured to: parse the 3GP file for the CVOSampleEntry; and modify a rendering orientation of hypertext transfer protocol (HTTP) streaming or dynamic adaptive streaming over HTTP (DASH) content for misalignment when the 3GP file includes the CVOSampleEntry and the MT is an orientation-aware terminal.

[12] 12. Mobile terminal according to claim 8, characterized by the fact that: the transceiver is additionally configured to: exchange a packet-switched streaming service (PSS) client capability for the MT CVO capability, where the MT CVO capability includes: a StreamingCVOCapable attribute to indicate whether the client is a CVO capable receiver of real-time transport protocol (RTP) streams, a StreamingHighGranularityCVOCapable attribute to indicate whether the client is a Higher Granularity CVO capable receiver of RTP streams, a ThreeGPCVOCapable attribute to indicate whether the client is a CVO capable receiver of third generation partnership project (3GPP) file format (3GP) files, or a ThreeGPHighGranularityCVOCapable attribute to indicate whether the client is a Higher Granularity CVO capable receiver of 3GP files.

[13] 13. Mobile terminal according to claim 8, characterized by the fact that: the processor is additionally configured to: capture user-generated content (UGC) video with a specified orientation; and embed CVO information for the UGC video in a third generation partnership project (3GPP) file format (3GP) file; and the transceiver is additionally configured to: upload the 3GP file for hypertext transfer protocol (HTTP) streaming or dynamic adaptive streaming over HTTP (DASH).

[14] 14. Mobile terminal according to claim 8, characterized by the fact that: the processor is additionally configured to: capture CVO data with a specified orientation; and store the CVO data in a third generation partnership project (3GPP) file format (3GP) file using a CVOSampleEntry for a timed sequence of CVO samples in a sample description box of an International Organization for Standardization (ISO) file format box structure, where the CVOSampleEntry fields include a BoxHeader size or type, a Data Reference Index, or a Granularity, and where the BoxHeader type is set to a value of "3gvo"; and the transceiver is additionally configured to: upload the 3GP file for hypertext transfer protocol (HTTP) streaming or dynamic adaptive streaming over HTTP (DASH).

[15] 15. Mobile terminal according to claim 8, characterized by the fact that the MT CVO capability is provided in a third generation partnership project (3GPP) long-term evolution (LTE) packet-switched streaming service (PSS) session, a dynamic adaptive streaming over hypertext transfer protocol (HTTP) (DASH) session, or an integrated multimedia subsystem (IMS) based PSS and multimedia broadcast and multicast service (MBMS) (IMS_PSS_MBMS) session.
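Claims 11 and 14 store timed CVO samples under a CVOSampleEntry in the ISO file format sample description box. The sketch below packs such an entry as a plain box (32-bit size, type "3gvo", data reference index, granularity); the reserved padding and field widths beyond the fields named in the claims are assumptions made for illustration, not a reproduction of the normative box definition.

```python
import struct

# Hedged sketch of a CVOSampleEntry-like box for a 3GP file (claims 11 and 14).
# Layout: 32-bit BoxHeader size + 4-char type "3gvo", then the fields named in the
# claims; reserved bytes and field widths are assumptions of this example.

def cvo_sample_entry(data_reference_index: int, granularity: int) -> bytes:
    body = (
        b"\x00" * 6                                   # SampleEntry reserved bytes
        + struct.pack(">H", data_reference_index)     # Data Reference Index
        + struct.pack(">B", granularity)              # Granularity (e.g. 2 or 6 bits)
    )
    header = struct.pack(">I4s", 8 + len(body), b"3gvo")  # BoxHeader size + type "3gvo"
    return header + body

entry = cvo_sample_entry(data_reference_index=1, granularity=6)
print(len(entry), entry[4:8])   # 17 b'3gvo'
```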
[16] 16. Mobile terminal according to claim 8, characterized by the fact that it additionally comprises: an orientation sensor to determine an orientation of the MT.

[17] 17. Mobile terminal according to claim 8, characterized by the fact that the mobile terminal includes a user equipment (UE) or a mobile station (MS), and the mobile terminal includes an antenna, a camera, a touch-sensitive display screen, a speaker, a microphone, a graphics processor, an application processor, internal memory, or a non-volatile memory port.

[18] 18. Method for signaling a coordination of video orientation (CVO) capability of a mobile terminal (MT) at a server, characterized by comprising: receiving, at the server, a device capability for a client via a device capability exchange message; identifying when the device capability includes a CVO attribute; and adapting content streamed to the client based on an inclusion of the CVO attribute.

[19] 19. Method according to claim 18, characterized by the fact that adapting the streamed content additionally comprises: modifying a display orientation of hypertext transfer protocol (HTTP) streaming, dynamic adaptive streaming over HTTP (DASH), or real-time transport protocol (RTP) streaming for misalignment when the device capability for the client does not include the CVO attribute, indicating that the client is not an orientation-aware terminal; or embedding a CVO indication attribute in a media presentation description (MPD) metadata file or a session description protocol (SDP) file, when the device capability for the client includes the CVO attribute indicating that the client is an orientation-aware terminal, so that the client modifies the display orientation of a stream.

[20] 20. Method according to claim 18, characterized by the fact that adapting the streamed content additionally comprises: delivering a media presentation description (MPD) metadata file for dynamic adaptive streaming over hypertext transfer protocol (HTTP) (DASH) content to the client with a CVO indication attribute provided in a common codecs attribute or element for an AdaptationSet, Representation or SubRepresentation, where the codecs attribute is set to "3gvo" in order to indicate the presence of an oriented video component and associated CVO information in the streamed DASH content; and delivering a session description protocol (SDP) file for real-time transport protocol (RTP) streaming to the client with a CVO indication via an a=extmap attribute with a CVO uniform resource name (URN) of "urn:3gpp:video-orientation", representing a 2-bit granularity for CVO, or "urn:3gpp:video-orientation:6", representing a 6-bit granularity for Higher Granularity CVO, of CVO information contained in an RTP extension header.
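The method of claims 18 to 20 reduces, at the server, to a branch on the exchanged capability: fold the orientation correction into the media for clients that lack the CVO attribute, or merely signal CVO for orientation-aware clients and let them correct at render time. A minimal, self-contained sketch of that decision is shown below; the content record, the "baked-in rotation" stand-in for orientation-aware transcoding, and the field names are hypothetical simplifications.

```python
# Minimal sketch of the server-side adaptation branch in claims 18-20.
# The content record and the rotation stand-in are hypothetical simplifications.

def adapt_for_client(content: dict, orientation_aware: bool, capture_rotation: float) -> dict:
    adapted = dict(content)
    if not orientation_aware:
        # Legacy client: fold the correction into the media itself before delivery
        # (stands in for orientation-aware transcoding / format conversion).
        adapted["baked_in_rotation"] = (-capture_rotation) % 360
        adapted["cvo_signaled"] = False
    else:
        # Orientation-aware client: just signal CVO (e.g. "3gvo" in the MPD codecs,
        # or a=extmap in the SDP) and let the terminal correct at render time.
        adapted["codecs"] = adapted["codecs"] + ",3gvo"
        adapted["cvo_signaled"] = True
    return adapted

source = {"codecs": "avc1.42E01E", "uri": "segment_001.m4s"}
print(adapt_for_client(source, orientation_aware=False, capture_rotation=90.0))
print(adapt_for_client(source, orientation_aware=True, capture_rotation=90.0))
```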
[21] 21. Method according to claim 18, characterized by the fact that adapting the streamed content additionally comprises: receiving user-generated content (UGC) video that includes CVO information embedded in the UGC video in a third generation partnership project (3GPP) file format (3GP) file, in which the 3GP file uses: a CVOSampleEntry for a timed sequence of CVO samples in a sample description box of an International Organization for Standardization (ISO) file format box structure, where the CVOSampleEntry fields include a BoxHeader size or type, a Data Reference Index, or a Granularity, and where the BoxHeader type is set to a value of "3gvo"; or a common codecs attribute or element for an AdaptationSet, Representation or SubRepresentation of a media presentation description (MPD), where the codecs attribute is set to "3gvo" in order to indicate the presence of an oriented video component and associated CVO information in dynamic adaptive streaming over hypertext transfer protocol (HTTP) (DASH) content.

[22] 22. Method according to claim 18, characterized by the fact that adapting the streamed content additionally comprises: storing CVO data in a third generation partnership project (3GPP) file format (3GP) file using a CVOSampleEntry for a timed sequence of CVO samples in a sample description box of an International Organization for Standardization (ISO) file format box structure, where the CVOSampleEntry fields include a BoxHeader size or type, a Data Reference Index, or a Granularity, and where the BoxHeader type is set to a value of "3gvo"; or storing CVO data for real-time transport protocol (RTP) streaming in an RTP extension header.

[23] 23. Method according to claim 18, characterized by the fact that receiving the device capability for the client additionally comprises: exchanging a packet-switched streaming service (PSS) client capability for the client, where the CVO capability includes: a StreamingCVOCapable attribute to indicate whether the client is a CVO capable receiver of real-time transport protocol (RTP) streams, a StreamingHighGranularityCVOCapable attribute to indicate whether the client is a Higher Granularity CVO capable receiver of RTP streams, a ThreeGPCVOCapable attribute to indicate whether the client is a CVO capable receiver of third generation partnership project (3GPP) file format (3GP) files, or a ThreeGPHighGranularityCVOCapable attribute to indicate whether the client is a Higher Granularity CVO capable receiver of 3GP files.

[24] 24. System comprising logic, characterized by the logic being configured to implement the method as defined in claim 18.